Probability Update: Conditioning vs. Cross-Entropy

Authors

  • Adam J. Grove
  • Joseph Y. Halpern
Abstract

Conditioning is the generally agreed-upon method for updating probability distributions when one learns that an event is certainly true. But it has been argued that we need other rules, in particular the rule of cross-entropy minimization, to handle updates that involve uncertain information. In this paper we re-examine such a case: van Fraassen’s Judy Benjamin problem [1987], which in essence asks how one might update given the value of a conditional probability. We argue that--contrary to the suggestions in the literature--it is possible to use simple conditionalization in this case, and thereby obtain answers that agree fully with intuition. This contrasts with proposals such as cross-entropy, which are easier to apply but can give unsatisfactory answers. Based on the lessons from this example, we speculate on some general philosophical issues concerning probability update.

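To make the contrast concrete, here is a small numerical sketch of the cross-entropy update in the Judy Benjamin problem as it is usually stated in the literature: a uniform prior that puts 1/2 on Blue territory and 1/4 on each of the two Red sub-areas (Headquarters and Second Company), and a report that P(Headquarters | Red) = 3/4. The code is illustrative only and is not taken from the paper; the cell labels and the use of scipy are choices made for this sketch.

```python
# Minimal sketch of the Judy Benjamin update via cross-entropy minimization
# (illustrative; not code from the paper).  Cells are [Blue, Red-HQ, Red-2nd]
# with the standard prior (1/2, 1/4, 1/4).  The reported information is
# P(Red-HQ | Red) = 3/4, i.e. q[1] = 3 * q[2].

import numpy as np
from scipy.optimize import minimize

prior = np.array([0.5, 0.25, 0.25])   # P(Blue), P(Red-HQ), P(Red-2nd)

def kl_divergence(q, p=prior):
    """D(q || p), the quantity minimized by the cross-entropy rule."""
    return float(np.sum(q * np.log(q / p)))

constraints = [
    {"type": "eq", "fun": lambda q: q.sum() - 1.0},       # q is a distribution
    {"type": "eq", "fun": lambda q: q[1] - 3.0 * q[2]},   # P(Red-HQ | Red) = 3/4
]

res = minimize(kl_divergence, x0=prior,
               bounds=[(1e-9, 1.0)] * 3, constraints=constraints)
q = res.x
print(f"P(Blue)    = {q[0]:.3f}")   # ~0.533: the update shifts mass onto Blue
print(f"P(Red-HQ)  = {q[1]:.3f}")   # ~0.350
print(f"P(Red-2nd) = {q[2]:.3f}")   # ~0.117
```

The cross-entropy posterior moves P(Blue) from 1/2 to about 0.533 even though the report concerns only the Red sub-areas, which is the kind of answer many find counterintuitive. Roughly, the paper's alternative is to condition in a richer space that also models how the report came about; handled that way, ordinary conditionalization can return the intuitive answer P(Blue) = 1/2.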

Similar articles

In Proceedings of the Thirteenth Annual Conference on Uncertainty in Artificial Intelligence, Providence, Rhode Island, August 1-3, 1997

Stochastic Optimization and Machine Learning: Cross-Validation for Cross-Entropy Method

We explore using machine learning techniques to adaptively learn the optimal hyperparameters of a stochastic optimizer as it runs. Specifically, we investigate using multiple importance sampling to weight previously gathered samples of an objective function, combined with cross-validation to update the exploration/exploitation hyperparameter. We employ this on the Cross-Entropy method as ...

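For context, the sketch below shows the basic cross-entropy method that this line of work builds on, applied to a toy one-dimensional minimization. It is an illustrative sketch under simplifying assumptions: the multiple-importance-sampling weights and the cross-validated hyperparameter update described above are not implemented, and the fixed smoothing factor alpha merely stands in for the exploration/exploitation hyperparameter being adapted in that paper.

```python
# Basic cross-entropy (CE) method on a toy 1-D minimization problem
# (illustrative sketch; the adaptive machinery described above is omitted).

import numpy as np

rng = np.random.default_rng(0)

def objective(x):
    return (x - 2.0) ** 2            # toy objective, minimum at x = 2

mu, sigma = 0.0, 5.0                 # parameters of the Gaussian sampler
n_samples, n_elite, alpha = 100, 10, 0.7   # alpha: fixed smoothing factor

for _ in range(30):
    samples = rng.normal(mu, sigma, n_samples)                  # candidates
    elite = samples[np.argsort(objective(samples))[:n_elite]]   # best ones
    # Refit the sampler to the elite set, smoothed by alpha.
    mu = alpha * elite.mean() + (1 - alpha) * mu
    sigma = alpha * elite.std() + (1 - alpha) * sigma

print(f"estimated minimizer: {mu:.3f}")   # close to 2.0
```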

Gibbs Conditioning Extended, Boltzmann Conditioning Introduced

Extensions of the Conditioned Weak Law of Large Numbers and of the Gibbs conditioning principle, two probabilistic results which provide a justification of the Relative Entropy Maximization (REM/MaxEnt) method, to the case of multiple REM distributions are proposed. Also, their μ-projection (Maximum Probability) alternatives are introduced.

Updating ACO Pheromones Using Stochastic Gradient Ascent and Cross-Entropy Methods

In this paper we introduce two systematic approaches, based on the stochastic gradient ascent algorithm and the cross-entropy method, for deriving the pheromone update rules in the Ant colony optimization metaheuristic. We discuss the relationships between the two methods as well as connections to the update rules previously proposed in the literature.

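As a rough illustration of the cross-entropy side of that connection, the toy sketch below applies a CE-style update to a pheromone vector over a binary encoding: pheromones are evaporated and then pulled toward the frequency of each component among the best sampled solutions. The encoding, the quality function, and all parameter names here are made up for illustration; they are not the update rules derived in that paper.

```python
# Toy CE-style pheromone update on a binary encoding (illustrative only).
# Each component i is switched on with probability pheromone[i]; pheromones
# are then evaporated (rate rho) and reinforced toward elite frequencies.

import numpy as np

rng = np.random.default_rng(1)
n_items, n_ants, n_elite, rho = 8, 50, 10, 0.3
target = rng.integers(0, 2, n_items)             # hidden "best" solution

def quality(solution):
    return int(np.sum(solution == target))       # toy quality: matches to target

pheromone = np.full(n_items, 0.5)                # start unbiased

for _ in range(40):
    ants = (rng.random((n_ants, n_items)) < pheromone).astype(int)  # sample ants
    scores = np.array([quality(a) for a in ants])
    elite = ants[np.argsort(scores)[-n_elite:]]                     # best solutions
    pheromone = (1 - rho) * pheromone + rho * elite.mean(axis=0)    # CE-style update

print("learned pheromone:", np.round(pheromone, 2))
print("target solution:  ", target)
```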

Modeling the Potential of Gully Erosion Occurrence Applying Shannon Entropy and Statistical Index Models in Seymareh Region

The occurrence of gully erosion, due to its high rate of sediment production in the watershed, is one of the main problems of natural resources management in the context of soil management and protection. It is known as an important signature of land degradation and land forming, as well as a source of sediment in a range of environments. Gully erosion often has severe environmental and econ...


Journal title:

Volume   Issue

Pages  -

Publication date  1997